
    Container-based Cloud Virtual Machine benchmarking

    This research was pursued under the EPSRC grant EP/K015745/1, ‘Working Together: Constraint Programming and Cloud Computing’, an Erasmus Mundus Master’s scholarship and an Amazon Web Services Education Research grant.

    With the availability of a wide range of cloud Virtual Machines (VMs), it is difficult to determine which VMs can maximise the performance of an application. Benchmarking is commonly used to this end to capture the performance of VMs. Most cloud benchmarking techniques are heavyweight: time-consuming processes that must benchmark the entire VM to obtain accurate data. Such benchmarks cannot be used in real time on the cloud and incur extra costs even before an application is deployed. In this paper, we present lightweight cloud benchmarking techniques that execute quickly and can be used in near real time on the cloud. The exploration of lightweight benchmarking techniques is facilitated by the development of DocLite - Docker Container-based Lightweight Benchmarking. DocLite is built on the Docker container technology, which allows a user-defined portion of the VM (such as memory size and the number of CPU cores) to be benchmarked. DocLite operates in two modes. In the first mode, containers are used to benchmark a small portion of the VM to generate performance ranks. In the second mode, historic benchmark data is used along with the first mode as a hybrid to generate VM ranks. The generated ranks are evaluated against three scientific high-performance computing applications. The proposed techniques are up to 91 times faster than a heavyweight technique that benchmarks the entire VM. The first mode generates ranks with over 90% and 86% accuracy for sequential and parallel execution of an application, respectively. The hybrid mode improves the correlation slightly, but the first mode is sufficient for benchmarking cloud VMs.
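    As an illustration of how a user-defined portion of a VM can be benchmarked, the sketch below runs a benchmark inside a Docker container restricted to a chosen memory size and CPU core. The image name, benchmark command and resource values are hypothetical placeholders, not DocLite's actual interface.

    # Minimal sketch: run a benchmark inside a container limited to a
    # user-defined slice of the VM (memory size and CPU cores), in the
    # spirit of DocLite's first mode. "bench-image" and the benchmark
    # command are hypothetical placeholders.
    import subprocess

    def run_container_benchmark(image, command, memory="512m", cpu_cores="0"):
        """Run `command` in `image` with the given memory limit and CPU set."""
        result = subprocess.run(
            [
                "docker", "run", "--rm",
                f"--memory={memory}",          # cap memory available to the benchmark
                f"--cpuset-cpus={cpu_cores}",  # pin the benchmark to specific cores
                image,
            ] + command,
            capture_output=True, text=True, check=True,
        )
        return result.stdout  # raw benchmark output, to be parsed into scores

    if __name__ == "__main__":
        # Benchmark a small slice of the VM: 512 MB of RAM and one core.
        print(run_container_benchmark("bench-image", ["./run-benchmark.sh"]))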

    DocLite: A Docker-Based Lightweight Cloud Benchmarking Tool

    This research was pursued under the EPSRC grant EP/K015745/1, a Royal Society Industry Fellowship, an Erasmus Mundus Master’s scholarship and an AWS Education Research grant.

    Existing benchmarking methods are time-consuming processes because they typically benchmark the entire Virtual Machine (VM) in order to generate accurate performance data, making them less suitable for real-time analytics. The research in this paper aims to surmount this challenge by presenting DocLite - a Docker Container-based Lightweight benchmarking tool. DocLite explores lightweight cloud benchmarking methods for rapidly executing benchmarks in near real time. DocLite is built on the Docker container technology, which allows a user-defined memory size and number of CPU cores of the VM to be benchmarked. The tool incorporates two benchmarking methods: the first, referred to as the native method, employs containers to benchmark a small portion of the VM and generate performance ranks; the second uses historic benchmark data along with the native method as a hybrid to generate VM ranks. The proposed methods are evaluated on three use-cases and are observed to be up to 91 times faster than benchmarking the entire VM. In both methods, small containers provide the same quality of rankings as a large container. The native method generates ranks with over 90% and 86% accuracy for sequential and parallel execution of an application, compared against benchmarking the whole VM. The hybrid method did not improve the quality of the rankings significantly.
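    The hybrid method can be pictured as a weighted combination of container-measured scores and archived whole-VM scores, with VMs ranked by the combined value. The sketch below illustrates that idea under stated assumptions; the weight, scores and VM names are made up and are not the scheme or values used in the paper.

    # Minimal sketch of a hybrid ranking: blend native (container) benchmark
    # scores with historic whole-VM scores and order VMs best-first.
    # All numbers and names below are illustrative assumptions.

    def hybrid_ranks(native_scores, historic_scores, w_native=0.5):
        """Return VM names ordered best-first by a weighted combined score."""
        combined = {
            vm: w_native * native_scores[vm]
                + (1 - w_native) * historic_scores.get(vm, 0.0)
            for vm in native_scores
        }
        return sorted(combined, key=combined.get, reverse=True)

    if __name__ == "__main__":
        native = {"m3.large": 0.72, "c3.xlarge": 0.91, "r3.large": 0.64}    # container-measured
        historic = {"m3.large": 0.70, "c3.xlarge": 0.88, "r3.large": 0.69}  # archived whole-VM data
        print(hybrid_ranks(native, historic))  # e.g. ['c3.xlarge', 'm3.large', 'r3.large']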

    An Anti-Plagiarism Add-on For WebCAT

    Plagiarism is a major problem in every discipline, and Computer Science courses are no different. It is very common for students to submit their peers' programming assignments as their own. This practice is unfair and also hinders the learning of the students who choose to copy. This research investigates the performance of various software plagiarism detection tools such as MOSS, JPlag and Plaggie. Controlled changes were made to a code file, and the sensitivity of the various tools to those changes was determined. Plaggie, with its algorithm of tokenisation followed by string comparison, was found to have acceptable performance in our tests. It is also open source, whereas the other tools are proprietary and web-based, so we decided to incorporate Plaggie into Web-CAT. Web-CAT is a flexible, automated grading system designed to process computer programming assignments. It serves as a learning environment for software testing tasks and helps automatically assess student assignments. The developed system was tested using submissions to a real class assignment and by a variety of potential future users in a number of tests, and the feedback received was very positive. In addition, the tests suggested a number of possible future enhancements.
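    The tokenisation-followed-by-string-comparison idea attributed to Plaggie can be sketched as below; the simple regex tokeniser and the use of Python's difflib are simplifications assumed for illustration, not Plaggie's actual implementation.

    # Minimal sketch: tokenise two source files, then compare the token
    # streams. Renaming identifiers barely changes the token structure,
    # so trivially disguised copies still score as similar.
    import re
    from difflib import SequenceMatcher

    def tokenise(source):
        """Very rough lexer: identifiers, numbers and punctuation become tokens."""
        return re.findall(r"[A-Za-z_]\w*|\d+|\S", source)

    def similarity(source_a, source_b):
        """Similarity ratio in [0, 1]; 1.0 means identical token streams."""
        return SequenceMatcher(None, tokenise(source_a), tokenise(source_b)).ratio()

    if __name__ == "__main__":
        a = "int total = 0; for (int i = 0; i < n; i++) total += i;"
        b = "int sum = 0; for (int j = 0; j < n; j++) sum += j;"  # variables renamed
        print(f"similarity: {similarity(a, b):.2f}")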